
    Convex Programs for Temporal Verification of Nonlinear Dynamical Systems

    A methodology for safety verification of continuous and hybrid systems using barrier certificates has recently been proposed. The conditions that a barrier certificate must satisfy can be formulated as a convex program, and feasibility of the program implies system safety, in the sense that no trajectory starting from a given set of initial states reaches a given unsafe region. The dual of this problem, the reachability problem, concerns proving the existence of a trajectory starting from the initial set that reaches another given set. Using insights from the linear programming duality that appears in the discrete shortest-path problem, we show in this paper that reachability of continuous systems can also be verified through convex programming. Several convex programs for verifying safety and reachability, as well as other temporal properties such as eventuality, avoidance, and their combinations, are formulated. Examples are provided to illustrate the application of the proposed methods. Finally, we exploit the convexity of our methods to derive a converse theorem for safety verification using barrier certificates.
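The barrier-certificate conditions described above can be sanity-checked numerically on a toy example. The sketch below uses a hypothetical 1-D system, sets, and candidate certificate chosen purely for illustration (none of them come from the paper, which works with convex programs rather than grid checks):

```python
# Numerical sanity check of barrier-certificate conditions for a toy
# 1-D system xdot = f(x) = -x.  The sets and the candidate B(x) are
# illustrative assumptions, not taken from the paper.

def f(x):
    return -x                  # system dynamics

def B(x):
    return x - 1.5             # candidate barrier certificate, B'(x) = 1

X0 = [0.5 + 0.01 * i for i in range(51)]    # initial set  [0.5, 1.0]
Xu = [2.0 + 0.01 * i for i in range(101)]   # unsafe set   [2.0, 3.0]
X  = [0.01 * i for i in range(301)]         # state domain [0.0, 3.0]

ok_init   = all(B(x) <= 0 for x in X0)      # B <= 0 on the initial set
ok_unsafe = all(B(x) > 0 for x in Xu)       # B > 0  on the unsafe set
ok_flow   = all(1.0 * f(x) <= 0 for x in X) # dB/dt = B'(x) f(x) <= 0
```

If all three conditions hold, no trajectory from the initial set can cross the level set B(x) = 0 into the unsafe region, which is exactly the safety argument the abstract formulates as a convex feasibility problem.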

    On feasibility, stability and performance in distributed model predictive control

    In distributed model predictive control (DMPC), where a centralized optimization problem is solved in a distributed fashion using dual decomposition, it is important to keep the number of iterations in the solution algorithm, i.e. the amount of communication between subsystems, as small as possible. At the same time, the number of iterations must be large enough to give a feasible solution to the optimization problem and to guarantee stability of the closed-loop system. In this paper, a stopping condition for the distributed optimization algorithm that guarantees these properties is presented. The stopping condition is based on two theoretical contributions. First, since the optimization problem is solved using dual decomposition, standard techniques to prove stability in model predictive control (MPC), i.e. a terminal cost and a terminal constraint set involving all state variables, do not apply. For the case without a terminal cost or a terminal constraint set, we present a new method to quantify the control horizon needed to ensure stability and a prespecified performance. Second, the stopping condition is based on a novel adaptive constraint tightening approach, which guarantees that a primal feasible solution to the optimization problem is found and that closed-loop stability and performance are obtained. Numerical examples show that the number of iterations needed to guarantee feasibility of the optimization problem, stability, and a prespecified performance of the closed-loop system can be reduced significantly using the proposed stopping condition.
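The interplay of dual decomposition, early stopping, and constraint tightening can be illustrated on a tiny static problem. The sketch below is a made-up two-subsystem example (it omits the dynamics, horizons, and adaptive tightening of the paper): each subproblem is solved locally given a price, the dual update uses a tightened budget, and iterations stop as soon as the primal iterate satisfies the original constraint:

```python
# Dual decomposition with early stopping and constraint tightening on
# a toy coupled problem (all numbers are illustrative, not from the
# paper):  minimize (x1-3)^2 + (x2-2)^2  subject to  x1 + x2 <= b.
# Each subproblem has a closed-form solution given the price lam.

b, eps, alpha = 4.0, 0.1, 0.5    # budget, tightening margin, step size
lam = 0.0                        # dual variable ("price")
iters = 0
while True:
    # local subproblem solutions for the current price
    x1 = 3.0 - lam / 2.0         # argmin over x1 of (x1-3)^2 + lam*x1
    x2 = 2.0 - lam / 2.0         # argmin over x2 of (x2-2)^2 + lam*x2
    iters += 1
    # stopping condition: primal iterate feasible for the ORIGINAL,
    # untightened constraint
    if x1 + x2 <= b:
        break
    # dual ascent against the TIGHTENED constraint x1 + x2 <= b - eps
    lam = max(0.0, lam + alpha * (x1 + x2 - (b - eps)))
```

Because the dual iteration aims at the tightened budget b - eps, the untightened constraint is met after only a few price updates, at a point close to the true optimizer (x1, x2) = (2.5, 1.5); this mirrors, in miniature, how tightening buys primal feasibility at an early stopping point.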

    On second-order cone positive systems

    Internal positivity offers a computationally cheap certificate for external (input-output) positivity of a linear time-invariant system. However, this certificate has the drawback of being realization dependent. Firstly, computing such a realization requires finding a polyhedral cone with a potentially high number of extremal generators, which can lift the dimension of the state-space representation significantly. Secondly, not all externally positive systems possess an internally positive realization. Thirdly, in many typical applications such as controller design, system identification, and model order reduction, internal positivity is not preserved. To overcome these drawbacks, we present a tractable sufficient certificate of external positivity based on second-order cones. This certificate does not require any special state-space realization: if it succeeds with a possibly non-minimal realization, then it will do so with any minimal realization. While there exist systems for which this certificate is also necessary, we demonstrate how to construct systems where both second-order and polyhedral cones, as well as other certificates, fail. Nonetheless, in contrast to other realization-independent certificates, the present one appears favourable in terms of applicability and conservatism. Three applications are discussed to underline its potential. We show how the certificate can be used to find externally positive approximations of nearly externally positive systems and demonstrate that this may help to reduce system identification errors. The same algorithm is then used to design state-feedback controllers that provide closed-loop external positivity, a common approach to avoid over- and undershooting of the step response. Lastly, we present modifications to generalized balanced truncation such that external positivity is preserved where our certificate applies.
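External positivity simply means a nonnegative impulse response. For intuition, the sketch below brute-force samples the impulse response of an illustrative discrete-time system; this finite check is neither the paper's second-order-cone certificate nor a proof (it only inspects finitely many steps), and the matrices are made up. Note that this particular realization is also internally positive (A, B, C entrywise nonnegative), which is the cheap certificate the abstract starts from:

```python
# Brute-force sampling of external positivity (nonnegative impulse
# response) for an illustrative discrete-time system
#   x[k+1] = A x[k] + B u[k],  y[k] = C x[k].

A = [[0.5, 0.2],
     [0.1, 0.6]]
B = [1.0, 0.0]
C = [0.0, 1.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

h = []                      # impulse response samples h[k] = C A^k B
x = B[:]
for k in range(50):
    h.append(sum(C[i] * x[i] for i in range(len(x))))
    x = matvec(A, x)

externally_positive = all(hk >= 0 for hk in h)
```

The paper's point is precisely that such realization-dependent or sample-based checks can be replaced by a tractable, realization-independent cone condition.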

    A converse theorem for density functions

    It is proved that the existence of a density function is both necessary and sufficient for almost global stability of a nonlinear system.
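A density function certifies almost global stability of xdot = f(x) through the divergence condition div(rho * f) > 0 almost everywhere. The sketch below checks this numerically for the standard scalar textbook pair f(x) = -x with rho(x) = 1/x^2 (illustrative choices, not taken from the paper, which proves the converse direction):

```python
# Numerical check of the density-function condition div(rho * f) > 0
# for the scalar system xdot = -x with density rho(x) = 1/x^2.
# Here rho(x) * f(x) = -1/x, whose derivative is 1/x^2 > 0 for x != 0.

def flux(x):
    return (-x) * (1.0 / x ** 2)    # f(x) * rho(x) = -1/x

def div_flux(x, h=1e-6):
    return (flux(x + h) - flux(x - h)) / (2 * h)   # central difference

grid = [x / 100.0 for x in range(-300, 301) if x != 0]  # avoid x = 0
positive_divergence = all(div_flux(x) > 0 for x in grid)
```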

    Dynamic Dual Decomposition for Distributed Control

    We show how dynamic price mechanisms can be used for decomposition and distributed optimization of control systems. A classical method to deal with optimization constraints is Lagrange relaxation, where dual variables are introduced in the optimization objective. When variables of different subproblems are connected by such constraints, the dual variables can be interpreted as prices in a market mechanism serving to achieve mutual agreement between the subproblems. In this paper, the same idea is used for decomposition of optimal control problems, with dynamics in both decision variables and prices. We show how the prices can be used for decentralized verification that a control law or trajectory stays within a prespecified distance from optimality. For example, approximately optimal decentralized controllers can be obtained by using simplified models for decomposition and more accurate local models for control.
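The price-based verification idea rests on weak duality: for any announced price, the dual function lower-bounds the optimal cost, so the gap between a feasible candidate's cost and the dual value certifies its distance from optimality, using only locally computable terms. A minimal static sketch, with illustrative numbers not taken from the paper:

```python
# Price-based certificate of near-optimality for a toy two-subsystem
# problem:  minimize (x1-1)^2 + (x2-3)^2  subject to  x1 + x2 = 2.
# For any price lam, q(lam) lower-bounds the optimum, so for a
# feasible candidate x the gap f(x) - q(lam) bounds its suboptimality.

def f(x1, x2):
    return (x1 - 1.0) ** 2 + (x2 - 3.0) ** 2

def q(lam):
    # each subsystem minimizes its own Lagrangian term locally
    x1 = 1.0 - lam / 2.0          # argmin of (x1-1)^2 + lam*x1
    x2 = 3.0 - lam / 2.0          # argmin of (x2-3)^2 + lam*x2
    return f(x1, x2) + lam * (x1 + x2 - 2.0)

x1, x2 = 0.2, 1.8                 # feasible candidate (x1 + x2 = 2)
lam = 2.0                         # announced price
gap = f(x1, x2) - q(lam)          # certified distance from optimality
```

Here the candidate is certified to be within gap = 0.08 of the optimal cost without any subsystem needing to know the other's objective; the paper extends this mechanism to dynamic prices along trajectories.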

    Optimizing Positively Dominated Systems

    It has recently been shown that several classical open problems in linear system theory, such as optimization of decentralized output feedback controllers, can be readily solved for positive systems using linear programming. In particular, optimal solutions can be verified for large-scale systems using computations that scale linearly with the number of interconnections. Hence two fundamental advantages are achieved compared to classical methods for multivariable control: distributed implementations and scalable computations. This paper extends these ideas to the class of positively dominated systems. The results are illustrated by computation of optimal spring constants for a network of point-masses connected by springs.
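The linear-scaling verification mentioned above comes from linear (rather than quadratic) stability certificates: a Metzler matrix A is Hurwitz if and only if some vector z > 0 gives A z < 0 componentwise, and checking a given z touches each nonzero entry of A exactly once. A minimal sketch with an illustrative matrix and certificate (the paper's positively dominated setting reduces to such checks on a dominating positive system):

```python
# Linear stability certificate for a Metzler matrix: A is Hurwitz
# iff there exists z > 0 with A z < 0 componentwise.  Verifying a
# given z costs one pass over the nonzero entries of A, i.e. it
# scales with the number of interconnections.

A = [[-2.0,  1.0,  0.0],
     [ 1.0, -3.0,  1.0],
     [ 0.0,  1.0, -2.0]]    # Metzler: off-diagonal entries >= 0

z = [1.0, 1.0, 1.0]         # candidate certificate, z > 0

Az = [sum(A[i][j] * z[j] for j in range(3)) for i in range(3)]
stable = all(v < 0 for v in Az)    # A z < 0  =>  A is Hurwitz
```

Finding such a z in the first place is a linear program, which is why these synthesis and verification problems stay tractable at scale.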

    Distributed Control of Positive Systems

    Stabilization and optimal control are studied for state space systems with nonnegative coefficients (positive systems). In particular, we show that a stabilizing distributed feedback controller, when it exists, can be computed using linear programming. The same methods are also used to minimize the closed loop input-output gain. An example devoted to distributed control of a vehicle platoon is examined.
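Once a distributed feedback has been computed, the same linear certificate verifies it: the closed loop A + BK must stay Metzler, and some z > 0 must give (A + BK) z < 0. The sketch below checks a hypothetical diagonal (i.e. purely local) gain on illustrative matrices; it verifies a given K rather than computing one by linear programming as the paper does:

```python
# Verifying that a distributed (here diagonal, purely local) feedback
# u = K x stabilizes a positive system xdot = A x + B u, via the
# linear certificate z > 0, (A + B K) z < 0.  Matrices are made up.

A = [[-1.0,  2.0],
     [ 2.0, -1.0]]          # Metzler but unstable (eigenvalue +1)
B = [[1.0, 0.0],
     [0.0, 1.0]]
K = [[-2.0,  0.0],
     [ 0.0, -2.0]]          # each input uses only its own state

n = 2
Acl = [[A[i][j] + sum(B[i][k] * K[k][j] for k in range(n))
        for j in range(n)] for i in range(n)]

metzler = all(Acl[i][j] >= 0 for i in range(n) for j in range(n) if i != j)
z = [1.0, 1.0]
certified = metzler and all(sum(Acl[i][j] * z[j] for j in range(n)) < 0
                            for i in range(n))
```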